
    A Question of Justice: The WTO, Africa, And Countermeasures For Breaches of International Trade, 38 J. Marshall L. Rev. 1153 (2005)

    Background/aims To evaluate the perception of three-dimensional (3D) shape in patients with strabismus and the contributions of stereopsis and monocular cues to this perception. Methods Twenty-one patients with strabismus with, and 20 without, stereo acuity, as well as 25 age-matched normal volunteers, performed two tasks: (1) identifying the closest vertices of 3D shapes from monocular shading (3D-SfS), texture (3D-SfT) or motion cues (3D-SfM) and from binocular disparity (3D-SfD); (2) discriminating one-dimensional (1D) elementary features of these cues. Results Discrimination of the elementary features of luminance, texture and motion did not differ across groups. When the distances between reported and actual closest vertices were resolved into sagittal and frontoparallel plane components, the sagittal components in 3D-SfS and the frontoparallel components in 3D-SfT indicated larger errors in patients with strabismus without stereo acuity than in normal subjects. These patients could not discriminate 1D elementary features of binocular disparity. Patients with strabismus with stereo acuity performed worse than normal subjects for both components of 3D-SfD and for the frontoparallel components of 3D-SfT. No differences were observed in the perception of 3D-SfM across groups. A comparison between normal subjects and patients with strabismus with normal stereopsis revealed no deficit in 3D shape perception from any cue. Conclusions Binocular stereopsis is essential for fine perception of 3D shape, even when 3D shape is defined by monocular static cues. Interaction between these cues may occur in ventral occipitotemporal regions, where 3D-SfS, 3D-SfT and 3D-SfD are processed in the same or neighbouring cortical regions. Our findings demonstrate the perceptual benefit of binocular stereopsis in patients with strabismus.

    Grasp-specific motor resonance is influenced by the visibility of the observed actor

    Motor resonance is the modulation of M1 corticospinal excitability induced by observation of others' actions. Recent brain imaging studies have revealed that viewing videos of grasping actions led to a differential activation of the ventral premotor cortex (PMv) depending on whether the entire person is viewed versus only their disembodied hand. Here we used transcranial magnetic stimulation (TMS) to examine motor evoked potentials (MEPs) in the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) during observation of videos or static images in which a whole person or merely the hand was seen reaching and grasping a peanut (precision grip) or an apple (whole hand grasp). Participants were presented with six visual conditions in which visual stimuli (video vs static image), view (whole person vs hand) and grasp (precision grip vs whole hand grasp) were varied in a 2 × 2 × 2 factorial design. Observing videos, but not static images, of a hand grasping different objects resulted in a grasp-specific interaction, such that FDI and ADM MEPs were differentially modulated depending on the type of grasp being observed (precision grip vs whole hand grasp). This interaction was present when observing the hand acting, but not when observing the whole person acting. Additional experiments revealed that these results were unlikely to be due to the relative size of the hand being observed. Our results suggest that observation of videos rather than static images is critical for motor resonance. Importantly, observing the whole person performing the action abolished the grasp-specific effect, which could be due to a variety of PMv inputs converging on M1.

    Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons

    Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. We investigate the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. This activity was then used as input into one of two learning networks: a sparse coding network (competitive learning) and a backpropagation network (supervised learning). Both simulations develop specificities for optic flow which are comparable to those found in a neurophysiological study (Kubo et al. 2014), and the relative frequencies of the various neuronal responses are best modeled by the sparse coding approach. We conclude that the optic flow neurons in the zebrafish pretectum do reflect the optic flow statistics. The predicted vectorial receptive fields show typical optic flow fields but also "Gabor" and dipole-shaped patterns that likely reflect difference fields needed for reconstruction by linear superposition. Comment: Published Conference Paper from ICANN 2018, Rhode
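As a rough illustration of the competitive-learning idea, the sketch below clusters synthetic flow-like inputs with a winner-take-all rule. The two "global flow patterns", the network size, the noise level and the learning rate are all invented for the example; this is not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs: each sample is a flattened flow field drawn from one of
# two orthogonal global patterns (crude stand-ins for translatory and
# rotatory optic flow), plus noise. All dimensions are arbitrary.
dim, n_units, n_samples = 16, 4, 2000
p1 = np.tile([1.0, 0.0], dim // 2)   # uniform "translation-like" pattern
p2 = np.tile([0.0, 1.0], dim // 2)   # orthogonal second pattern
patterns = np.stack([p1, p2])
labels = rng.integers(0, 2, n_samples)
X = patterns[labels] + 0.1 * rng.standard_normal((n_samples, dim))

# Winner-take-all competitive learning with normalized weights: only the
# best-matching unit moves toward each input, so units specialize.
W = rng.standard_normal((n_units, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)
lr = 0.05
for x in X:
    winner = np.argmax(W @ x)           # unit with best-matching weights
    W[winner] += lr * (x - W[winner])   # move winner toward the input
    W[winner] /= np.linalg.norm(W[winner])

# Alignment (cosine) of each learned unit with each generating pattern.
sims = np.abs(W @ (patterns / np.linalg.norm(patterns, axis=1, keepdims=True)).T)
print(sims.max(axis=0))  # best alignment per pattern
```

After training, at least one unit aligns closely with each generating pattern, mirroring how sparse/competitive codes develop units tuned to the dominant flow statistics of the input.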

    Optic Flow Stimuli in and Near the Visual Field Centre: A Group fMRI Study of Motion Sensitive Regions

    Motion stimuli in one visual hemifield activate human primary visual areas of the contralateral side, but suppress activity of the corresponding ipsilateral regions. While hemifield motion is rare in everyday life, motion in both hemifields occurs regularly whenever we move. Consequently, during motion primary visual regions should simultaneously receive excitatory and inhibitory inputs. A comparison of primary and higher visual cortex activations induced by bilateral and unilateral motion stimuli has been missing until now. Many motion studies focused on the MT+ complex in the parieto-occipito-temporal cortex. In single human subjects MT+ has been subdivided into area MT, which was activated by motion stimuli in the contralateral visual field, and area MST, which responded to motion in both the contra- and ipsilateral field. In this study we investigated the cortical activation when excitatory and inhibitory inputs interfere with each other in primary visual regions, and we present for the first time group results for the MT+ subregions, allowing for comparisons with the group results of other motion processing studies. Using functional magnetic resonance imaging (fMRI), we investigated whole brain activations in a large group of healthy humans by applying optic flow stimuli in and near the visual field centre and performed a second level analysis. Primary visual areas were activated exclusively by motion in the contralateral field but, to our surprise, not by central flow fields. Inhibitory inputs to primary visual regions appear to cancel simultaneously occurring excitatory inputs during central flow field stimulation. Within MT+ we identified two subregions. Putative area MST (pMST) was activated by ipsi- and contralateral stimulation and located in the anterior part of MT+. The second subregion was located in the more posterior part of MT+ (putative area MT, pMT).

    Cluster Lenses

    Clusters of galaxies are the most recently assembled, massive, bound structures in the Universe. As predicted by General Relativity, given their masses, clusters strongly deform space-time in their vicinity. Clusters act as some of the most powerful gravitational lenses in the Universe. Light rays traversing through clusters from distant sources are hence deflected, and the resulting images of these distant objects therefore appear distorted and magnified. Lensing by clusters occurs in two regimes, each with unique observational signatures. The strong lensing regime is characterized by effects readily seen by eye, namely, the production of giant arcs, multiple-images, and arclets. The weak lensing regime is characterized by small deformations in the shapes of background galaxies only detectable statistically. Cluster lenses have been exploited successfully to address several important current questions in cosmology: (i) the study of the lens(es) - understanding cluster mass distributions and issues pertaining to cluster formation and evolution, as well as constraining the nature of dark matter; (ii) the study of the lensed objects - probing the properties of the background lensed galaxy population - which is statistically at higher redshifts and of lower intrinsic luminosity thus enabling the probing of galaxy formation at the earliest times right up to the Dark Ages; and (iii) the study of the geometry of the Universe - as the strength of lensing depends on the ratios of angular diameter distances between the lens, source and observer, lens deflections are sensitive to the value of cosmological parameters and offer a powerful geometric tool to probe Dark Energy. In this review, we present the basics of cluster lensing and provide a current status report of the field. Comment: About 120 pages - Published in Open Access at: http://www.springerlink.com/content/j183018170485723/.
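The geometric sensitivity described above can be made concrete with the Einstein radius of a point-mass lens, theta_E = sqrt(4GM/c^2 * D_ls/(D_l*D_s)). The cluster mass and angular diameter distances below are illustrative numbers only, not values taken from the review.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
Mpc = 3.086e22     # megaparsec, m

# Illustrative lens configuration (made-up values for demonstration).
M = 1e15 * M_sun                    # a very massive cluster
D_l, D_s = 1000 * Mpc, 2000 * Mpc   # observer-lens, observer-source distances
D_ls = 1200 * Mpc                   # lens-source distance, set independently
                                    # (angular diameter distances do not add)

# Point-mass Einstein radius: theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)).
theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))   # radians
theta_E_arcsec = math.degrees(theta_E) * 3600
print(f"theta_E ~ {theta_E_arcsec:.1f} arcsec")
```

Tens of arcseconds is the giant-arc regime. Because theta_E depends on the distance ratio D_ls/(D_l*D_s), the same lens produces different image configurations for sources at different redshifts, which is what makes cluster lenses geometric probes of cosmology.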

    Reprogramming of orientation columns in visual cortex: a domino effect

    Cortical organization rests upon the fundamental principle that neurons sharing similar properties are co-located. In the visual cortex, neurons are organized into orientation columns. In a column, most neurons respond optimally to the same axis of an oriented edge, that is, the preferred orientation. This orientation selectivity was long believed to be fixed in adulthood. However, it has been established that, even in a fully mature brain, neurons change their selectivity following sensory experience or visual adaptation. Here, we show that after applying an adapter away from the tested cells, neurons whose receptive fields were located remotely from the adapted site also exhibited a novel selectivity, in spite of the fact that they were not adapted. These results indicate a robust reconfiguration and remapping of the orientation domains with respect to each other, thus removing the possibility of an orientation hole in the new hypercolumn. These data suggest that orientation columns transcend anatomy and are almost strictly functionally dynamic.

    EEG Correlates of Attentional Load during Multiple Object Tracking

    While human subjects tracked a subset of ten identical, randomly moving objects, event-related potentials (ERPs) were evoked at parieto-occipital sites by task-irrelevant flashes that were superimposed on either tracked (Target) or non-tracked (Distractor) objects. With ERPs as markers of attention, we investigated how the allocation of attention varied with tracking load, that is, with the number of objects that were tracked. Flashes on Target discs elicited stronger ERPs than did flashes on Distractor discs; ERP amplitude (0–250 ms) decreased monotonically as load increased from two to three to four (of ten) discs. Amplitude decreased more rapidly for Target discs than for Distractor discs. As a result, with increasing tracking loads, the difference between ERPs to Targets and Distractors diminished. This change in ERP amplitudes with load accords well with behavioral performance, suggesting that successful tracking depends upon the relationship between the neural signals associated with attended and non-attended objects.

    Haptic Edge Detection Through Shear

    Most tactile sensors are based on the assumption that touch depends on measuring pressure. However, the pressure distribution at the surface of a tactile sensor cannot be acquired directly and must be inferred from the deformation field induced by the touched object in the sensor medium. Currently, there is no consensus as to which components of strain are most informative for tactile sensing. Here, we propose that shape-related tactile information is more suitably recovered from shear strain than from normal strain. Based on a contact mechanics analysis, we demonstrate that the elastic behavior of a haptic probe provides a robust edge detection mechanism when shear strain is sensed. We used a jamming-based robot gripper as a tactile sensor to empirically validate that shear strain processing gives accurate edge information that is invariant to changes in pressure, as predicted by the contact mechanics study. This result has implications for the design of effective tactile sensors as well as for the understanding of early somatosensory processing in mammals.
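A toy one-dimensional version of the shear-based edge cue can be sketched as follows. The logistic indentation profile and all numbers are invented for illustration; this is not the paper's contact-mechanics model.

```python
import numpy as np

# Toy model: a flat object whose edge sits at x = 0 indents an elastic
# layer, giving a smoothed-step normal displacement u_z(x). The shear
# strain component eps_xz ~ d(u_z)/dx then peaks at the edge.
x = np.linspace(-5.0, 5.0, 1001)
edges = []
for depth in (0.5, 1.0, 2.0):              # indentation depth ~ contact force
    u_z = depth / (1.0 + np.exp(-4 * x))   # logistic step (hypothetical profile)
    shear = np.gradient(u_z, x)            # finite-difference strain estimate
    edges.append(x[np.argmax(np.abs(shear))])
print(edges)
```

The recovered edge position stays at x = 0 as the indentation depth (a proxy for contact pressure) changes, loosely mirroring the pressure invariance reported for shear-strain processing.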

    An Empirical Explanation of the Speed-Distance Effect

    Understanding motion perception continues to be the subject of much debate, a central challenge being to account for why the speeds and directions seen accord with neither the physical movements of objects nor their projected movements on the retina. Here we investigate the varied perceptions of speed that occur when stimuli moving across the retina traverse different projected distances (the speed-distance effect). By analyzing a database of moving objects projected onto an image plane, we show that this phenomenology can be quantitatively accounted for by the frequency of occurrence of image speeds generated by perspective transformation. These results indicate that speed-distance effects are determined empirically from accumulated past experience with the relationship between image speeds and moving objects.
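The idea that the frequency of occurrence of image speeds under perspective can explain the effect is easy to illustrate with a minimal simulation. The depth range, focal length and world speed below are arbitrary assumptions, not the paper's database.

```python
import numpy as np

rng = np.random.default_rng(1)

# Under perspective projection x_img = f * X / Z, a point moving parallel
# to the image plane at world speed V has image speed v_img = f * V / Z.
f, V = 1.0, 2.0
Z = rng.uniform(1.0, 10.0, 100_000)   # hypothetical scene depths
v_img = f * V / Z

# One physical speed generates a whole distribution of image speeds, with
# slow image speeds (distant objects) far more frequent than fast ones.
counts, _ = np.histogram(v_img, bins=[0.2, 0.5, 1.0, 2.0])
print(counts)
```

The heavy skew toward slow image speeds is the kind of occurrence statistic that, on the empirical account, shapes which speed is actually perceived for a given retinal motion.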

    Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

    We measured perceived depth from optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information.
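A linear-Gaussian caricature of a "prior for flatness" makes the reported bias pattern explicit. The proportionality between velocity gradient and slant and all the variances below are assumptions chosen for illustration, not the authors' model.

```python
# Hypothetical model: the measured velocity gradient is g = s + noise, with
# noise ~ N(0, sigma_m^2), and the flatness prior is s ~ N(0, sigma_p^2).
# The MAP slant estimate is then a shrinkage of the measurement:
#   s_map = sigma_p**2 / (sigma_p**2 + sigma_m**2) * g
true_slant = 30.0    # degrees; noise-free g = true_slant taken for clarity
sigma_p = 20.0       # prior spread: surfaces tend to be flat (slant near 0)
estimates = []
for sigma_m in (1.0, 10.0, 30.0):    # increasing measurement noise
    shrink = sigma_p**2 / (sigma_p**2 + sigma_m**2)
    estimates.append(shrink * true_slant)
print(estimates)  # estimates shrink toward 0 (flat) as noise grows
```

With precise measurements the estimate is nearly veridical; as measurement noise grows, the prior dominates and the estimate is biased toward flatness, the same qualitative signature the abstract describes.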